
    Use Case Oriented Medical Visual Information Retrieval & System Evaluation

    Large amounts of medical visual data are produced daily in hospitals, while new imaging techniques continue to emerge. In addition, many images are made available continuously via publications in the scientific literature and can also be valuable for clinical routine, research and education. Information retrieval systems are useful tools to provide access to the biomedical literature and fulfil the information needs of medical professionals. The tools developed in this thesis can potentially help clinicians make decisions about difficult diagnoses via a case-based retrieval system based on a use case associated with a specific evaluation task. This system retrieves articles from the biomedical literature when queried with a case description and attached images. This thesis proposes a multimodal approach for medical case-based retrieval with a focus on the integration of visual information connected to text. Furthermore, the ImageCLEFmed evaluation campaign was organised during this thesis, promoting medical retrieval system evaluation.

    Foot Recognition Using Deep Learning for Knee Rehabilitation

    Foot recognition can be applied in many medical fields, such as gait pattern analysis and the knee exercises of patients in rehabilitation. Generally, a camera-based foot recognition system is intended to capture a patient image in a controlled room and background to recognize the foot in a limited set of views. However, such a system can be inconvenient for monitoring knee exercises at home. In order to overcome these problems, this paper proposes a deep learning method using Convolutional Neural Networks (CNNs) for foot recognition. The results are compared with traditional classification methods using LBP and HOG features with kNN and SVM classifiers. According to the results, the deep learning method provides better accuracy in recognizing foot images from online databases than the traditional classification methods, albeit with higher complexity.
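The traditional baseline described above pairs hand-crafted descriptors (LBP, HOG) with a kNN classifier. The following is a minimal sketch of the kNN side, using toy 2-D vectors as stand-ins for real HOG/LBP descriptors; the data and the `knn_predict` helper are illustrative, not the paper's implementation.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training descriptors (Euclidean distance)."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = train_y[nearest]
    # majority vote over the k nearest labels
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy descriptors standing in for HOG/LBP feature vectors:
# class 0 clusters near the origin, class 1 near (5, 5).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (10, 2)), rng.normal(5, 0.5, (10, 2))])
y = np.array([0] * 10 + [1] * 10)

print(knn_predict(X, y, np.array([0.2, -0.1])))  # 0
print(knn_predict(X, y, np.array([4.8, 5.1])))   # 1
```

In practice the descriptors would come from an LBP or HOG feature extractor, and the CNN alternative replaces both the feature extraction and the classifier with a single learned model.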

    Overview of the ImageCLEF 2015 medical classification task

    This article describes the ImageCLEF 2015 Medical Classification task. The task contains several subtasks that all use a dataset of figures from the biomedical open access literature (PubMed Central). Compound figures, which are frequent in the literature, are particularly targeted. For more detailed information analysis and retrieval it is important to extract targeted information from the compound figures. The proposed tasks include compound figure detection (separating compound figures from other figures), multi-label classification (defining all subtypes present), figure separation (finding the boundaries of the subfigures) and modality classification (detecting the figure type of each subfigure). The tasks are described along with the participation of international research groups. The results of the participants are then described and analysed to identify promising techniques.

    Generalization Performance of the Deep Learning Models in Neurodegenerative Disease Classification.

    Over the past decade, machine learning gained considerable attention from the scientific community and has progressed rapidly as a result. Given its ability to detect subtle and complicated patterns, deep learning (DL) has been utilized widely in neuroimaging studies for medical data analysis and automated diagnostics with varying degrees of success. In this paper, we question the remarkable accuracies of the best performing models by assessing the generalization performance of state-of-the-art convolutional neural network (CNN) models on the classification of the two most common neurodegenerative diseases, namely Alzheimer’s Disease (AD) and Parkinson’s Disease (PD), using MRI. We demonstrate the impact of the data division strategy on model performance by comparing the results derived from two different split approaches. We first evaluated the performance of the CNN models by dividing the dataset at the subject level, in which all of the MRI slices of a patient are put into either the training or the test set. We then observed that pooling together all slices prior to applying cross-validation, as erroneously done in a number of previous studies, leads to inflated accuracies by as much as 26% for the classification of the diseases.
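The data-leakage pitfall described above is easy to reproduce: if slices are shuffled freely, slices from the same patient end up on both sides of the split. A minimal sketch of the subject-level alternative, with made-up subject/slice counts and a hypothetical `subject_level_split` helper:

```python
import numpy as np

# 4 subjects, 3 MRI slices each; slice i belongs to subject i // 3
subjects = np.repeat(np.arange(4), 3)        # [0,0,0,1,1,1,2,2,2,3,3,3]

def subject_level_split(subjects, test_subjects):
    """All slices of a subject go to exactly one side of the split."""
    test_mask = np.isin(subjects, test_subjects)
    return np.where(~test_mask)[0], np.where(test_mask)[0]

train_idx, test_idx = subject_level_split(subjects, test_subjects=[3])

# No subject contributes slices to both sides -- the property that a
# naive slice-level shuffle violates and that inflates accuracy.
overlap = set(subjects[train_idx]) & set(subjects[test_idx])
print(sorted(overlap))  # []
```

With a slice-level shuffle, near-identical adjacent slices of one patient can appear in both training and test sets, so the model is partly memorizing patients rather than learning disease markers.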

    Medical Image Retrieval using Bag of Meaningful Visual Words: Unsupervised visual vocabulary pruning with PLSA

    Content-based medical image retrieval has been proposed as a technique that allows not only for easy access to images from the relevant literature and electronic health records but also for training physicians, for research and for clinical decision support. The bag-of-visual-words approach is a widely used technique that tries to shorten the semantic gap by learning meaningful features from the dataset and describing documents and images in terms of the histogram of these features. Visual vocabularies are often redundant, over-complete and noisy. Larger than required vocabularies lead to high-dimensional feature spaces, which present important disadvantages, the curse of dimensionality and computational cost being the most obvious ones. In this work a visual vocabulary pruning technique is presented. It greatly reduces the number of words required to describe a medical image dataset with no significant effect on accuracy. Results show that a reduction of up to 90% can be achieved without impact on system performance. Obtaining a more compact representation of a document enables multimodal description as well as the use of classifiers requiring low-dimensional representations.
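The bag-of-visual-words pipeline above can be sketched in a few lines: assign each local descriptor to its nearest visual word, histogram the assignments, then prune the vocabulary. The pruning criterion here (keep the highest-variance words) is a simplified stand-in for the paper's PLSA-based selection, and all sizes and helpers are illustrative.

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word and
    return the normalized word-frequency histogram."""
    d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

def prune_vocabulary(histograms, keep_ratio=0.5):
    """Keep only the most discriminative words -- here, hypothetically,
    those with the highest variance across the collection (the paper's
    PLSA-based criterion is more elaborate)."""
    variances = histograms.var(axis=0)
    keep = np.argsort(variances)[::-1][: int(len(variances) * keep_ratio)]
    return np.sort(keep)

rng = np.random.default_rng(1)
vocab = rng.normal(size=(8, 4))                      # 8 visual words, 4-D
hists = np.stack([bovw_histogram(rng.normal(size=(30, 4)), vocab)
                  for _ in range(5)])                # 5 toy images
kept = prune_vocabulary(hists, keep_ratio=0.5)
print(len(kept))  # 4 words survive the 50% pruning
```

After pruning, each image is described by the reduced histogram `hists[:, kept]`, which is what makes low-dimensional classifiers and multimodal fusion practical.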

    Content–based fMRI Brain Maps Retrieval

    The statistical analysis of functional magnetic resonance imaging (fMRI) is used to extract functional data of cerebral activation during a given experimental task. It allows for assessing changes in cerebral function related to cerebral activities. This methodology has been widely used, and a few initiatives aim to develop shared data resources. Searching these data resources for a specific research goal remains a challenging problem. In particular, work is needed to create a global content-based (CB) fMRI retrieval capability. This work presents a CB fMRI retrieval approach based on the brain activation maps extracted using Probabilistic Independent Component Analysis (PICA). We obtained promising results on data from a variety of experiments, which highlight the potential of the system as a tool that provides support for finding hidden similarities between brain activation maps.
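At its core, content-based retrieval of activation maps means ranking stored maps by their similarity to a query map. A minimal sketch using cosine similarity over flattened maps as a simple stand-in for comparing PICA component maps; the map sizes, names and the `retrieve` helper are illustrative.

```python
import numpy as np

def retrieve(query_map, collection, top_k=2):
    """Rank stored activation maps by cosine similarity to the query
    (a simplified stand-in for comparing PICA component maps)."""
    q = query_map.ravel()
    sims = []
    for name, m in collection.items():
        v = m.ravel()
        sims.append((name, float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))))
    return sorted(sims, key=lambda t: -t[1])[:top_k]

rng = np.random.default_rng(2)
base = rng.normal(size=(4, 4))                       # a toy activation map
collection = {
    "similar":   base + rng.normal(0, 0.1, (4, 4)),  # noisy copy of the query
    "unrelated": rng.normal(size=(4, 4)),            # independent map
}
print(retrieve(base, collection, top_k=1)[0][0])     # 'similar'
```

Real systems operate on registered 3-D volumes and use component-wise matching rather than a single flattened vector, but the ranking structure is the same.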

    Graph Representation for Content–based fMRI Activation Map Retrieval

    The use of functional magnetic resonance imaging (fMRI) to visualize brain activity in a non-invasive way is an emerging technique in neuroscience. It is expected that data sharing and the development of better search tools for the large amount of existing fMRI data may lead to a better understanding of the brain, through larger sample sizes or through collaboration among experts in various areas of expertise. In fact, there is a trend toward such sharing of fMRI data, but there is a lack of tools to effectively search fMRI data repositories, a factor which limits further research use of these repositories. Content-based (CB) fMRI brain map retrieval tools may alleviate this problem. A CB-fMRI brain map retrieval tool queries a brain activation map collection (containing brain maps showing activation areas after a stimulus is applied to a subject) and retrieves relevant brain activation maps, i.e. maps that are similar to the query brain activation map. In this work, we propose a graph-based representation for brain activation maps with the goal of improving retrieval accuracy as compared to existing methods. In this brain graph, nodes represent different specialized regions of a function-based brain atlas. We evaluated our approach using human subject data obtained from eight experiments where a variety of stimuli were applied.
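One simple way to realize the graph representation above: each activation map becomes a graph whose nodes are atlas regions and whose edges link regions that are co-activated in the same map; two maps are then compared by the overlap of their edge sets. This is a hypothetical sketch (the region names and the Jaccard comparison are illustrative, not the paper's matching method).

```python
# Each activation map becomes a graph: nodes are atlas regions, and an
# edge links two regions that are co-activated in the same map.
def edge_set(active_regions):
    """All unordered pairs of co-activated atlas regions."""
    regions = sorted(active_regions)
    return {(a, b) for i, a in enumerate(regions) for b in regions[i + 1:]}

def graph_similarity(map_a, map_b):
    """Jaccard overlap of the two co-activation edge sets
    (an illustrative stand-in for full graph matching)."""
    ea, eb = edge_set(map_a), edge_set(map_b)
    if not ea and not eb:
        return 1.0
    return len(ea & eb) / len(ea | eb)

query = {"motor", "visual", "parietal"}
print(graph_similarity(query, {"motor", "visual", "parietal"}))  # 1.0
print(graph_similarity(query, {"motor", "visual"}))              # 1/3
```

Representing maps by region-level graphs rather than raw voxels makes retrieval robust to small spatial misalignments between subjects, since similarity is computed over atlas regions instead of voxel coordinates.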

    The Parallel Distributed Image Search Engine (ParaDISE)

    Image retrieval is a complex task that differs according to the context and the user requirements in any specific field, for example in a medical environment. Search by text is often not possible or optimal, and retrieval by visual content does not always succeed in modelling the high-level concepts that a user is looking for. Modern image retrieval techniques consist of multiple steps and aim to retrieve information from large-scale datasets based not only on global image appearance but also on local features and, where possible, on connections between visual features and text or semantics. This paper presents the Parallel Distributed Image Search Engine (ParaDISE), an image retrieval system that combines visual search with text-based retrieval and that is available as open source and free of charge. The main design concepts of ParaDISE are flexibility, expandability, scalability and interoperability. These concepts make the system usable both in real-world applications and as an image retrieval research platform. Apart from the architecture and the implementation of the system, two use cases are described: an application of ParaDISE to the retrieval of images from the medical literature, and a visual feature evaluation for medical image retrieval. Future steps include the creation of an open source community that will contribute to and expand this platform based on the existing parts.

    Overview of the ImageCLEF 2016 Medical Task

    ImageCLEF is the image retrieval task of the Conference and Labs of the Evaluation Forum (CLEF). ImageCLEF has historically focused on the multimodal and language-independent retrieval of images. Many tasks are related to image classification and the annotation of image data as well. The medical task focused more on image retrieval in the beginning and on retrieval and classification tasks in subsequent years. In 2016, a main focus was the creation of metadata for a collection of medical images taken from articles of the biomedical scientific literature. In total, 8 teams participated in the four tasks and 69 runs were submitted. No team participated in the caption prediction task, a totally new task. Deep learning has now been used for several of the ImageCLEF tasks and by many of the participants, obtaining very good results. A majority of runs were submitted using deep learning, which follows general trends in machine learning. In several of the tasks, multimodal approaches clearly led to the best results.

    Using Crowdsourcing for Multi-label Biomedical Compound Figure Annotation

    Information analysis and retrieval for images in the biomedical literature needs to deal with a large number of compound figures (figures containing several subfigures), as they constitute probably more than half of all images in repositories such as PubMed Central, which was the data set used for the task. The ImageCLEFmed benchmark proposed, among other tasks in 2015 and 2016, a multi-label classification task, which aims at evaluating the automatic classification of figures into 30 image types. This task was based on compound figures, and thus the figures were distributed to participants as compound figures but also in a separated form. Therefore, the generation of a gold standard was required so that participants' algorithms could be evaluated and compared. This work presents the process carried out to generate the multi-labels of ∼2650 compound figures using a crowdsourcing approach. Automatic algorithms to separate compound figures into subfigures were used, and the results were then validated or corrected via crowdsourcing. The image types (MR, CT, X–ray, ...) were also annotated by crowdsourcing, including detailed quality control. Quality control is necessary to ensure the quality of the annotated data as much as possible. ∼625 h were invested, at a cost of ∼$870.
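Aggregating multi-label crowd annotations typically reduces to a per-label vote: a label is kept for a subfigure when enough workers assigned it. A minimal sketch with a hypothetical `aggregate_labels` helper and a made-up agreement threshold (the paper's actual quality-control scheme is more detailed):

```python
from collections import Counter

def aggregate_labels(votes, min_agreement=0.5):
    """Keep an image-type label when at least `min_agreement` of the
    crowd workers who saw the subfigure assigned it."""
    n_workers = len(votes)
    counts = Counter(label for worker_labels in votes
                     for label in set(worker_labels))
    return sorted(l for l, c in counts.items() if c / n_workers >= min_agreement)

# Three workers annotate one subfigure with image-type labels
votes = [{"MR", "CT"}, {"MR"}, {"MR", "X-ray"}]
print(aggregate_labels(votes))  # ['MR'] -- only MR reaches the majority
```

Thresholded voting of this kind is what makes quality control tractable at the scale of ∼2650 compound figures: disagreements surface as labels below the threshold and can be routed to additional validation.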